\input memo.tex[let,jmc]
\title{Logic in Artificial Intelligence}
%input from notebook 2/10/87
Artificial intelligence (AI) is a branch of computer science --
not of biology. It concerns methods for achieving goals in complex
circumstances. At present neither the circumstances nor the methods can be
characterized precisely, but it is easy to recognize many circumstances
in which people must behave intelligently to achieve their goals, and
many of the intellectual methods people use have been identified, studied
mathematically and experimentally, and programmed for computers.
AI has proved to be a difficult branch of science, because
intellectual mechanisms have been difficult to observe or to invent. It
requires new mathematics and new concepts. Because some of the most
important difficulties are conceptual, it is not possible to predict how
long it will be before there are computer programs with human level
intelligence -- maybe 5 years, maybe 500.
While intelligent behavior is a process and appropriately carried
out by a computer program, it requires information, and there are big
advantages in representing most information declaratively. Some of these
advantages are the following:
1. Facts are more modular than programs. They have meaning apart from their
uses, and a fact can often be used in ways not foreseen when it was acquired.
2. Facts are more readily communicated than procedures. Even a how-to-do-it
book consists almost entirely of facts.
3. Facts combine by inference to give new facts.
[McCarthy 1960] put forth these considerations and proposed that
common sense involved having available sufficiently obvious consequences
of what one learned about a specific situation from observation and
communication, together with one's permanent general information about the
consequences of actions and other events.
A computer program that does declarative reasoning must be
provided with a language in which to express what it believes. Besides the
notation itself, a specification of the language must also say what inferences
are legitimate. When the reasoning is for the purpose of deciding how to
achieve goals, the program must also use {\it heuristic\/} information to
guide its search for a solution. Some of this heuristic information must
be built into the program, but it is important to express as much of it as
possible in a declarative form for the same reasons mentioned above for
representing information about the world declaratively. Indeed all
information possessed by the program should be represented declaratively,
although some must also be represented procedurally. Otherwise, there
wouldn't be a program to do anything.
The obvious candidates for representing declarative information
and for specifying the allowed inferences are languages of mathematical
logic. Leibniz proposed a {\it characteristica universalis\/} intended to
replace argument by calculation and both Boole and Frege proposed to use
their systems of mathematical logic for common sense reasoning as well as
mathematical reasoning. Using mathematical logic for common sense
reasoning offers difficulties beyond those presented by reasoning in pure
mathematics, and most 20th century logicians either avoided the problem or
merely skirted its fringes.
\section{Representing Facts about Consequences of Events}
AI has used logic most extensively for representing
facts about the consequences of actions and other events.
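As one illustration of that use, an effect axiom in the style of the situation calculus might be written as follows. The particular predicate and function symbols ($clear$, $on$, $result$, $move$) are invented for this sketch and are not taken from the text above.

```latex
% Illustrative effect axiom: if blocks x and y are clear in situation s,
% then x is on y in the situation that results from moving x onto y.
$$\forall x\,\forall y\,\forall s\;
  \bigl(clear(x,s)\land clear(y,s)\bigr)\rightarrow
  on\bigl(x,y,result(move(x,y),s)\bigr)$$
```

Here $result(e,s)$ denotes the situation that results when event $e$ occurs in situation $s$, so facts about the future are expressed as ordinary first-order formulas about situations.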
\centerline{This draft of \jobname\ TEXed on \jmcdate\ at \theTime}
\centerline{Copyright \copyright\ \number\year\ by John McCarthy}
\vfill\eject
Leibniz, Boole and Frege, three of the founders of mathematical
logic, all wanted to use it to express facts and correct reasoning
about the world and not merely for studying the foundations of
mathematics. This goal proved extremely elusive, and was
abandoned by almost all mathematical logicians and philosophers.
Thirty years ago the new discipline of artificial intelligence
began using logic to express an intelligent computer program's
goals, knowledge of the situation in which it must act and knowledge
of the effects of its available actions. The task has proved difficult,
but there seems to be no good alternative. A computer program's
deciding what to do increasingly takes the form of controlled logical
inference.
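What "controlled logical inference" can look like inside a program may be sketched, very schematically, as backward chaining over propositional Horn clauses. The rule base and the predicate names below are invented for this illustration and do not come from any particular AI program.

```python
# A minimal backward-chaining interpreter for propositional Horn clauses.
# Rules are pairs (head, [body goals]); facts are atoms known to hold.

def prove(goal, rules, facts):
    """Try to establish `goal` from `facts` and the Horn rules."""
    if goal in facts:
        return True
    for head, body in rules:
        # A rule proves its head if every goal in its body can be proved.
        if head == goal and all(prove(g, rules, facts) for g in body):
            return True
    return False

# Illustrative (assumed) rule base and facts.
rules = [
    ("can_reach_goal", ["door_open", "path_clear"]),
    ("door_open", ["pushed_door"]),
]
facts = {"pushed_door", "path_clear"}

print(prove("can_reach_goal", rules, facts))  # → True
```

The "control" lies in the order in which rules and subgoals are tried; a practical program adds loop checking and heuristic ordering, which this sketch omits.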
As always happens when mathematics is applied, the AI use of logic
has led to new concepts and problems off the track that internally motivated
research in logic was following. We mention specifically non-monotonic
reasoning, logic programming, new approaches to modality and (somewhat
futuristically) the formalization of context.
\section{Epistemology and Heuristics}
The idea that computer programs are the appropriate vehicle for AI
was advanced by [Turing 1950]. In 1954 Newell and Simon began their work,
and their first program proved theorems in the logic of Russell and
Whitehead. Their objective was not only AI but also the psychology of
intelligence; their programs were designed to use the methods by which college
students untrained in logic prove the same theorems. Using logic to
represent facts about the world was not their objective.
[McCarthy 1960] proposed using logic for common sense reasoning in
AI. While the idea was moderately well received at the time, and the LISP
programming language was invented for the purpose, difficulties arose and
progress was slower than was hoped, as was progress in all aspects of AI.
The popularity of logic in AI has had ups and downs, and the present is
up. The reasons include recent discoveries in using logic in AI and better
computers and programming languages.